14 research outputs found

    Interval Principal Component Analysis and Its Application to Fault Detection and Data Classification

    Principal Component Analysis (PCA) is a linear data analysis tool that aims to reduce the dimensionality of a dataset while retaining most of the variation found in it. It transforms the variables of a dataset into a new set of variables, called principal components, that are linear combinations of the original variables. PCA is a powerful statistical technique used for fault detection, classification, and feature extraction. Interval Principal Component Analysis (IPCA) is an extension of PCA to large datasets that uses interval data generated from single-valued samples. In this thesis, three IPCA methods are introduced: Centers IPCA (CIPCA), Midpoint-Radii IPCA (MRIPCA), and Symbolic Covariance IPCA (SCIPCA). In addition, the methods and parameters used for fault detection and classification are described for both classical and interval data. The performance of the interval-generation methods used in IPCA is analyzed under different conditions. Three synthetic datasets were used to test the fault detection performance of all methods, and three real datasets were used to test their classification performance. The results show that, for the same false alarm rate, IPCA methods have a higher detection rate than classical PCA on average. Moreover, unlike PCA, IPCA methods can accurately differentiate the type of fault: interval centers detect changes in the mean, while interval radii detect changes in the variance. For data classification, MRIPCA achieves the highest average precision among the methods considered.
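    As a rough illustration of the interval-generation step described above, the following is a minimal Python sketch, not the thesis code: single-valued samples are aggregated over fixed-length windows into intervals, represented by their centers and radii, and classical PCA is then run on the centers (in the spirit of CIPCA). The window length, the function names, and the eigendecomposition-based PCA are all illustrative assumptions.

        import numpy as np

        def make_intervals(X, window=10):
            # Aggregate consecutive windows of single-valued samples into
            # intervals [min, max], stored as centers and radii.
            n = (X.shape[0] // window) * window
            W = X[:n].reshape(-1, window, X.shape[1])
            lower, upper = W.min(axis=1), W.max(axis=1)
            return (upper + lower) / 2.0, (upper - lower) / 2.0

        def pca(X, n_components=2):
            # Classical PCA via eigendecomposition of the sample covariance.
            Xc = X - X.mean(axis=0)
            vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
            order = np.argsort(vals)[::-1][:n_components]
            return vecs[:, order], vals[order]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))                  # single-valued samples
        centers, radii = make_intervals(X, window=10)  # interval data
        P, eigvals = pca(centers)                      # components from centers
        scores = (centers - centers.mean(axis=0)) @ P  # projected centers

    Projecting the radii in the same way is what lets the centers flag mean shifts while the radii flag variance changes, as the abstract reports.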

    Fault Detection of Single and Interval Valued Data Using Statistical Process Monitoring Techniques

    Principal component analysis (PCA) is a linear data analysis technique widely used for fault detection and isolation, data modeling, and noise filtration. PCA may be combined with statistical hypothesis testing methods, such as the generalized likelihood ratio (GLR) technique, to detect faults. GLR uses maximum likelihood estimation (MLE) to maximize the detection rate for a fixed false alarm rate. The benchmark Tennessee Eastman Process (TEP) is used to examine the performance of the different techniques, and the results show that for processes that experience shifts in the mean and/or variance, the best performance is achieved by monitoring the mean and variance independently with two separate GLR charts, rather than simultaneously with a single chart. Moreover, single-valued data can be aggregated into interval form to provide a more robust model with improved fault detection performance using PCA and GLR. The TEP example is used once more to demonstrate the effectiveness of interval-valued data over single-valued data.
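    To make the two-chart idea concrete, here is a minimal sketch, assuming i.i.d. Gaussian monitoring residuals with known nominal parameters; the closed-form log-GLR statistics follow from plugging the MLE of the shifted mean (the sample mean) and of the changed variance (the sample second moment) into the likelihood ratio. The window length and the Monte Carlo thresholding for a fixed false alarm rate are illustrative choices, not taken from the thesis.

        import numpy as np

        def glr_mean(r, sigma=1.0):
            # Log-GLR for a mean shift: the MLE of the shift is the sample
            # mean, giving n * xbar^2 / (2 * sigma^2).
            return len(r) * r.mean()**2 / (2.0 * sigma**2)

        def glr_var(r, sigma0=1.0):
            # Log-GLR for a variance change: with rho = s^2 / sigma0^2,
            # the statistic is (n/2) * (rho - 1 - log(rho)).
            rho = np.mean(r**2) / sigma0**2
            return 0.5 * len(r) * (rho - 1.0 - np.log(rho))

        rng = np.random.default_rng(1)
        n = 50
        # Threshold for a ~5% false alarm rate, estimated under H0 by Monte Carlo.
        null_stats = [glr_mean(rng.normal(size=n)) for _ in range(10000)]
        h_mean = np.quantile(null_stats, 0.95)

        faulty = rng.normal(loc=0.5, size=n)   # window with a mean shift
        print(glr_mean(faulty) > h_mean)       # True: the mean chart alarms

    Running glr_mean and glr_var side by side on each window gives the two separate charts that the TEP comparison in the abstract favors over a single combined statistic.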

    First substantiated record of slender goby Gobius geniporus (Osteichthyes: Gobiidae) from the Syrian coast (Eastern Mediterranean Sea)

    A specimen of slender goby Gobius geniporus Valenciennes, 1837 was caught on 21 February 2021 from the Syrian coast at a depth of 13 m. The specimen measured 87 mm in total length and weighed 6.89 g. This finding represents the first record of G. geniporus from the Syrian coast and confirms the species' occurrence in the Levant Basin, in the eastern region of the Mediterranean Sea.
